Synergetic Event Understanding: A Collaborative Approach to Cross-Document Event Coreference Resolution with Large Language Models
Min, Qingkai, Guo, Qipeng, Hu, Xiangkun, Huang, Songfang, Zhang, Zheng, Zhang, Yue
Cross-document event coreference resolution (CDECR) involves clustering event mentions across multiple documents that refer to the same real-world events. Existing approaches utilize fine-tuning of small language models (SLMs) like BERT to address the compatibility among the contexts of event mentions. However, due to the complexity and diversity of contexts, these models are prone to learning simple co-occurrences. Recently, large language models (LLMs) like ChatGPT have demonstrated impressive contextual understanding, yet they encounter challenges in adapting to specific information extraction (IE) tasks. In this paper, we propose a collaborative approach for CDECR, leveraging the capabilities of both a universally capable LLM and a task-specific SLM. The collaborative strategy begins with the LLM accurately and comprehensively summarizing events through prompting. Then, the SLM refines its learning of event representations based on these insights during fine-tuning. Experimental results demonstrate that our approach surpasses the performance of both the large and small language models individually, forming a complementary advantage. Across various datasets, our approach achieves state-of-the-art performance, underscoring its effectiveness in diverse scenarios.
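A minimal sketch of the two-stage collaboration described in this abstract, assuming hypothetical helper names (summarize_event, build_pair_input, cluster_mentions), an illustrative prompt, and a greedy clustering step; the paper's actual prompts, models, and clustering procedure may differ.

```python
# Hedged sketch of the LLM -> SLM collaboration for CDECR.
# All names and the prompt below are illustrative assumptions, not the
# authors' implementation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EventMention:
    doc_id: str
    trigger: str
    context: str       # sentence(s) surrounding the trigger
    summary: str = ""  # filled in by the LLM step


def summarize_event(mention: EventMention, llm: Callable[[str], str]) -> str:
    """Step 1: prompt a general-purpose LLM to summarize the event
    (participants, time, location, action) from the raw mention context."""
    prompt = (
        "Summarize the event centered on the trigger word "
        f"'{mention.trigger}' in the following text. Mention the participants, "
        f"time, and location if present.\n\nText: {mention.context}"
    )
    return llm(prompt)


def build_pair_input(m1: EventMention, m2: EventMention) -> str:
    """Step 2: pack both mentions plus their LLM summaries into one sequence
    for the task-specific SLM (e.g., a BERT-style cross-encoder)."""
    return (
        f"{m1.context} [SUMMARY] {m1.summary} [SEP] "
        f"{m2.context} [SUMMARY] {m2.summary}"
    )


def cluster_mentions(
    mentions: List[EventMention],
    score_pair: Callable[[str], float],  # fine-tuned SLM: pair text -> coref score
    threshold: float = 0.5,
) -> List[List[EventMention]]:
    """Greedy agglomeration by pairwise coreference scores (one simple choice)."""
    clusters: List[List[EventMention]] = []
    for m in mentions:
        for cluster in clusters:
            if any(score_pair(build_pair_input(m, other)) > threshold for other in cluster):
                cluster.append(m)
                break
        else:
            clusters.append([m])
    return clusters
```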
A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution
Ding, Bowen, Min, Qingkai, Ma, Shengkun, Li, Yingjie, Yang, Linyi, Zhang, Yue
Based on Pre-trained Language Models (PLMs), event coreference resolution (ECR) systems have demonstrated outstanding performance in clustering coreferential events across documents. However, the state-of-the-art system exhibits an excessive reliance on the 'triggers lexical matching' spurious pattern in the input mention pair text. We formalize the decision-making process of the baseline ECR system using a Structural Causal Model (SCM), aiming to identify spurious and causal associations (i.e., rationales) within the ECR task. Leveraging the debiasing capability of counterfactual data augmentation, we develop a rationale-centric counterfactual data augmentation method with LLM-in-the-loop. This method is specialized for pairwise input in the ECR system, where we conduct direct interventions on triggers and context to mitigate the spurious association while emphasizing the causation.
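A minimal sketch of the rationale-centric interventions described in this abstract, assuming a hypothetical llm_rewrite callable and illustrative prompts; the paper's actual intervention design and filtering steps are not reproduced here.

```python
# Hedged sketch: counterfactual augmentation for pairwise ECR input.
# Helper names and prompts are illustrative assumptions, not the authors' code.

from dataclasses import dataclass, replace
from typing import Callable, List


@dataclass
class MentionPair:
    trigger_a: str
    context_a: str
    trigger_b: str
    context_b: str
    coreferent: bool  # gold label


def intervene_on_trigger(pair: MentionPair, llm_rewrite: Callable[[str], str]) -> MentionPair:
    """Replace one trigger with an LLM-proposed synonym so lexical matching
    no longer predicts the (unchanged) coreference label."""
    synonym = llm_rewrite(
        f"Give one synonym for the event trigger '{pair.trigger_b}' "
        f"that fits this sentence: {pair.context_b}"
    ).strip()
    return replace(
        pair,
        trigger_b=synonym,
        context_b=pair.context_b.replace(pair.trigger_b, synonym),
    )


def intervene_on_context(pair: MentionPair, llm_rewrite: Callable[[str], str]) -> MentionPair:
    """For a coreferent pair, rewrite one context to describe a different event,
    so the triggers still match lexically but the label flips to non-coreferent."""
    new_context = llm_rewrite(
        "Rewrite the sentence so it describes a different event (different "
        f"participants, time, or place) but keep the trigger '{pair.trigger_b}': "
        f"{pair.context_b}"
    )
    return replace(pair, context_b=new_context, coreferent=False)


def augment(pairs: List[MentionPair], llm_rewrite: Callable[[str], str]) -> List[MentionPair]:
    """Add counterfactuals for pairs that exhibit the spurious pattern."""
    augmented = list(pairs)
    for p in pairs:
        if p.trigger_a.lower() == p.trigger_b.lower():  # triggers lexically match
            augmented.append(intervene_on_trigger(p, llm_rewrite))
            if p.coreferent:
                augmented.append(intervene_on_context(p, llm_rewrite))
    return augmented
```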
The Next CSR Challenge: Engaging in a Dialogue About Artificial Intelligence
Products using artificial intelligence (AI) are creeping into our lives: in the home, online, at work, in the marketplace, in the doctor's office. What if AI gets carried away, if it hasn't already? There are plenty of movies and books that contemplate this. While those scenarios may be easy to dismiss, the consequences of what could happen are not. Companies that use AI are putting their brands at risk if society doesn't adequately understand how it benefits from the technology.